MECHANISM AND METHOD FOR ACCESSING DATA IN A SHARED MEMORY
Patent abstract:
A mechanism and method for allowing at least one client (40) to access data in a shared memory (22) includes allocating data present in the shared memory (22), the memory (22) being configured in a plurality of buffers (36), and access to the data by a client (40) or a server (50) without data locking or data access limitation.
Publication number: FR3025908A1
Application number: FR1558367
Filing date: 2015-09-09
Publication date: 2016-03-18
Inventors: Gregory Reed Sykes; Christian Reynolds Decker
Applicant: GE Aviation Systems LLC
IPC main class:
Patent description:
[0001] Mechanism and method for accessing data in shared memory. A line-replaceable unit (LRU) is a modular component of a larger assembly, such as a vehicle or an aircraft, designed to specifications that ensure it can be exchanged and/or replaced in the event of failure. For example, the LRUs of an aircraft may include fully autonomous systems, sensors, radios, and other auxiliary equipment for managing and/or performing functions of the aircraft. In an aircraft environment, LRUs may be designed to operate according to particular operation, interoperability, and/or form-factor criteria, such as those defined by the ARINC series of standards.

[0002] A plurality of LRUs may be connected by a data network to access or exchange data in a shared memory of a flight control computer or other computer system, which may further manage and/or perform functions of the aircraft. In a first embodiment, a mechanism for allowing at least one client to access data in a shared memory includes an allocation of data present in the shared memory to at least one subject, the allocation being accessible by a predetermined constant address, the subject(s) having a number of buffers equal to the number of clients accessing the subject(s), plus two for each server accessing the subject(s), each client and server having an active access pointer, and an active access pointer director to direct the active access pointers to buffers according to a transaction request from a client or server. One buffer always contains the most recent data in the shared memory, and at least one buffer is always available for accessing data in the shared memory. In addition, the active access pointers are allocated among the buffers by the active access pointer director using only machine assembly language transactions, without copying the data at an operating-system level.
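The buffer-sizing rule just described (one buffer per client, plus two per server) can be sketched in Python; the names `required_buffers` and `Subject` are illustrative assumptions for exposition, not identifiers from the patent.

```python
# Illustrative model (an assumption for exposition, not the patented
# machine-level implementation): a subject holds one buffer per client,
# plus two per server -- one for the server to write into and one so the
# most recent data stays readable while a new write is in progress.
def required_buffers(num_clients: int, num_servers: int) -> int:
    """Buffer count rule from the description: clients + 2 per server."""
    return num_clients + 2 * num_servers

class Subject:
    """A subject: a fixed set of equally sized buffers."""
    def __init__(self, num_clients: int, num_servers: int, buffer_size: int):
        self.buffers = [bytearray(buffer_size)
                        for _ in range(required_buffers(num_clients, num_servers))]
        self.latest = 0  # index of the buffer holding the most recent data

print(required_buffers(3, 1))  # 3 clients + 2 * 1 server -> 5
```

With this sizing, every client can hold a buffer, the server can hold one for writing, and one buffer still remains to carry the most recent data, which is why no lock is ever needed.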
In another embodiment, a method for allowing at least one client to access data in shared memory includes allocating shared memory data to at least one subject, assigning a single predetermined address to access each at least one subject, assigning a number of buffers for each at least one subject equal to the number of clients accessing the subject(s), plus two for each server accessing the subject(s), and responding to transaction requests from at least one of the clients or servers by assigning an active access pointer for each respective client or server to a buffer. Access to the data takes place through the buffer, without copying the data at an operating-system level.

[0003] The invention will be better understood from the detailed study of some embodiments taken by way of nonlimiting examples and illustrated by the appended drawings, in which: FIG. 1 is a schematic view of a data communication network of an aircraft, according to one embodiment of the invention; FIG. 2 is a schematic view of clients accessing the buffers of a subject, according to one embodiment of the invention; FIG. 3 is a schematic view of a mechanism allowing clients to access the most recent data in a buffer, according to one embodiment of the invention; and FIG. 4 is a schematic view of a mechanism for clients and a server to perform a read/write transaction on buffered data, according to one embodiment of the invention. The described embodiments of the present invention are illustrated in the context of an aircraft having a plurality of sensors, systems, software components and/or hardware components, all operating on a single system directly accessing a common or shared memory. However, embodiments of the invention can be implemented in any context using clients and servers accessing a shared or common memory.
On the other hand, although "clients" and "servers" are referred to hereinafter, the particular embodiments described are nonlimiting examples of clients and servers. In addition, although a "client" is described, any component or "consumer" of data in the shared memory may be included. Likewise, although a "server" is described, any component "producing" data for the shared memory may be included. Additional examples of clients and servers may include remote or local discrete units, applications, computer processes, processing threads, etc., or any combination thereof, that access shared memory. For example, a plurality of "clients" may all reside in the same computer or processing unit, accessing a common random access memory (RAM). Figure 1 is a schematic illustration of a data communication system 24 according to one embodiment of the invention. One or more computer threads or processes 26, each comprising one or more clients 18, communicatively access a shared memory 22, represented as a shared RAM. In addition, one or more computer threads or processes 28 may each include one or more servers 20, also having access to the shared memory 22. In this sense, each process 26, 28, client 18, and server 20 may have access to the shared memory 22. In addition, although some processes 26, 28 are illustrated as having only one respective client 18 or respective server 20, an embodiment of the invention may include processes 26, 28 that include a combination of clients 18 and/or servers 20 in a single process 26, 28. Although a server 20 is described, embodiments of the invention may include any computer system, a computer system running an ARINC 653 operating system, a flight management system, an on-board computer, etc. The memory 22 may include random access memory (RAM), flash memory or one or more different types of portable electronic memory, etc., or any suitable combination of these types of memory.
The clients 18 and/or the servers 20 may cooperate with the memory 22 so that the clients 18 and/or the servers 20, or any computer programs or processes in them, can access a part of the memory 22 (for example the "shared memory" 22). For purposes of this description, "programs" and/or "processes" may comprise all or part of a computer program having an executable instruction set for controlling the management and/or operation of at least one of the respective client 18, the respective server 20, or respective functions of an aircraft. The program and/or the processes may include a computer program that may include computer-readable media carrying computer-executable instructions or data structures stored thereon. Such computer-readable media may be any available media accessible to a general-purpose or special-purpose computer or to another machine with a processor. Generally, such a computer program may include routines, programs, objects, components, data structures, algorithms, etc., which have the technical effect of performing particular tasks or implementing particular abstract data types. Computer-executable instructions, corresponding data structures, and programs are examples of program code for executing the information exchange as presented herein. Computer-executable instructions may include, for example, instructions and data which cause a general-purpose computer, a special-purpose computer, a controller, or a special-purpose processing machine to perform a certain function or group of functions. The data communication network 24 shown in FIG. 1 is only a schematic representation of one embodiment of the invention and serves to illustrate that a plurality of clients 18 and servers 20 can reside in the same computer system of the aircraft. The exact location of the clients 18 and the servers 20 is not relevant to the embodiments of the invention.
In addition, a larger or smaller number of clients 18 and/or servers 20 may be included in embodiments of the invention. The communication network 24 may comprise a system bus or other communication components of a computer system to facilitate interconnection for communications between the clients 18 and the servers 20. In addition, the configuration and operation of the network 24 may be defined by a common set of standards or regulations applicable to particular aeronautical contexts.

[0004] The memory 22 is further shown as including an allocation of data to at least one group, or "subject" 32, placed at a predetermined constant addressable location, or "constant address" 34, of the memory 22. Within the meaning of the present description, a "subject" may comprise a predetermined subset of the memory 22 allocated for a particular data storage use for the aircraft. For example, a single subject 32 may comprise a single data allocation, such as the aircraft's airspeed, or may include a plurality of data elements related or unrelated to each other, such as the waypoints of the current flight plan. The subjects 32 may be organized sequentially starting from the constant address 34, notably in the form of a singly linked list; however, other organizational structures for the subjects 32 may be designed, including matrices, variable allocations for each subject 32, etc., all starting from the location of the constant address 34. Each of the processes 26, 28, and/or the respective clients 18 and servers 20, is preconfigured to include the predetermined constant address 34 of the shared memory 22. In this sense, each process 26, 28, client 18 and/or server 20 is preconfigured to identify the location of the constant address 34 and, therefore, the subject or subjects 32 whose data must be accessible.
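The organization just described — subjects chained sequentially from a single constant address — can be modeled as a singly linked list. This is a minimal sketch under assumed names (`SubjectNode`, `CONSTANT_ADDRESS`, `find_subject`); the actual mechanism operates on raw shared memory rather than Python objects.

```python
# Illustrative sketch: subjects reachable from one constant, well-known
# address and organized as a singly linked list. All names here are
# assumptions made for exposition; the patent operates on raw memory.
class SubjectNode:
    def __init__(self, name, next_node=None):
        self.name = name
        self.next = next_node

# The "constant address" is modeled as a head reference that every
# client and server process is preconfigured to know.
CONSTANT_ADDRESS = SubjectNode("airspeed", SubjectNode("flight_plan"))

def find_subject(head, name):
    """Walk the list from the constant address to locate a subject."""
    node = head
    while node is not None:
        if node.name == name:
            return node
        node = node.next
    return None

print(find_subject(CONSTANT_ADDRESS, "flight_plan").name)  # -> flight_plan
```

Because every process knows the same constant head address, no discovery protocol or coordination is needed to locate a subject.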
For purposes of this description, each client 18 and/or client process 26 may be considered a "client" accessing data in the shared memory 22, and each server 20 and/or server process 28 may be considered a "server" accessing data in the shared memory 22.

[0005] In one embodiment of the invention, the number of subjects 32 in the shared memory 22 is predefined during the initialization of the memory 22, according to a known number of subjects 32 accessible to the clients and/or servers. In another embodiment of the invention, the number of subjects 32 is defined at or during run time by the collective number of subjects 32 accessible to the clients and/or servers. In this sense, the number of subjects 32 can be dynamic, increasing and decreasing as needed, or only growing when additional subjects 32 must be accessed.

[0006] Referring now to Figure 2, each subject 32 further comprises a plurality of buffers 36 adapted to store a predetermined amount of data needed for a particular data item. For example, a subject 32 for accessing the aircraft's airspeed may have a plurality of buffers 36, each designed to store eight bytes.

[0007] In another example, a subject 32 for accessing the current flight plan may have a plurality of buffers 36, each designed to store a thousand bytes. By way of illustration, the plurality of buffers 36 are shown as having different classification states, namely occupied 44, unoccupied 46, and containing the most recent data 48. Each state will be explained in more detail later. Each subject 32 is furthermore shown as comprising a control and/or direction member such as an active access pointer director 38. The active access pointer director 38 directs access to the plurality of buffers 36 based on a data transaction request, which will be explained in more detail later.
Other possible embodiments of the invention may include a separate or remote active access pointer director 38, for example a controller or a processor, located remotely from the subject 32. As shown schematically, one or more clients 40, each comprising an active access pointer 42, are able to access a specific buffer 36 identified by the respective active access pointer 42. In addition, one or more servers 50, each comprising an active access pointer 52, are able to access a specific buffer 36 identified by the respective active access pointer 52. As illustrated, a first client 54 and a second client 56 are respectively associated with a first buffer 58 and a second buffer 60. In this way, the first and second buffers 58, 60 have been identified as occupied buffers. A third client 62 is shown not associated with the subject 32, as is the server 50. Although each of the active access pointers 42, 52 is shown as being part of the clients 40 or servers 50 respectively, embodiments of the invention may include active access pointers 42, 52 belonging to the subject 32 and/or the buffers 36.

[0008] In one embodiment of the invention, the number of buffers 36 in each subject 32 and the size of each buffer 36 are predefined during the initialization of the shared memory 22, based on a known number of clients 40 and/or servers 50 capable of accessing the subject 32. In another embodiment of the invention, the number of buffers 36 in each subject 32 is defined at or during run time according to the collective number of clients 40 and servers 50 then accessing the subject 32. In this sense, the number of buffers 36 can be dynamic, increasing or decreasing as needed, or only growing when additional clients 40 and/or servers 50 require access. In addition, embodiments of the invention may define the buffers 36 in a manner similar to the definition of the subjects 32, for example by predefining both the subjects 32 and the buffers 36 when initializing the shared memory 22, or in different ways, for example by predefining the subjects 32 while the buffers 36 are defined dynamically. Regardless of the embodiment described, the total number of buffers 36 may be equal to the number of clients 40 accessing the subject 32, plus two buffers 36 for each server 50 accessing the subject 32.

Referring now to FIG. 3, there is shown a mechanism for accessing data in the subject 32 and/or the buffers 36 of the shared memory 22. A third client 62 communicates with the subject 32 and with the subject's active access pointer director 38 (represented as dashed communication 64) to request a transaction with the data. The active access pointer director 38 responds to the third client 62 by identifying a third buffer 66 that contains the most recent data 48 of the subject 32. The third client 62, thus directed to the third buffer 66, directs its active access pointer 42 to the third buffer 66 (illustrated as a second communication 68). At this point, the third client 62 accesses the data stored in the third buffer 66 (the most recent data 48) and performs the desired transaction on the data. The active access pointer director 38 may direct the active access pointers 42, 52 of the client(s) 40 or the server 50 to a particular buffer 36 depending on the specific transaction requested. For example, the transaction may include at least one of: an extraction of the data stored in the buffer 36 (i.e., a "read-only" transaction); an extraction of the data stored in the buffer 36 together with the entry of new data into the buffer 36 based on a processing or calculation of the extracted data; the writing of new data into the buffer 36 based on data provided by the client (i.e., a "read/write" transaction); and/or the entry of new data by the server 50 into the buffer 36 with an instruction to transfer the new data, for example, into another part of the shared memory 22, so that it is visible and/or accessible to the client(s) 40 (i.e., a "storage" transaction). In one example, a "storage" transaction can identify the transferred data as the most recent data 48.

In one example of the mechanism for accessing data in the subject 32 and/or the buffers 36 of the shared memory 22, one or more clients 40 in communication with the subject 32 to request a read-only transaction may each be assigned the same buffer, such as the third buffer 66, which contains the most recent data 48 of the subject 32. Since, in this case, none of the clients will change the data, there will be no collisions or data integrity issues with the data available to the clients. In this way, the read-only clients 40 can perform their transactions asynchronously with each other, without interference. As explained, the ratio of read-only clients 40 to an assigned buffer 36 is not necessarily one to one; it may be many to one. After the read-only clients 40 have completed their respective transactions, they can end the communication with their respective buffer 36 until another transaction is requested. At the time of a second transaction request, the mechanism is repeated so that the client 40 can access the most recent data 48, identified by the active access pointer director 38; that data may be the same data in the same buffer 36, new data in the same buffer 36, or new data in a different buffer 36.

The exemplary mechanism described above is illustrated in FIG. 4, which builds on the mechanism shown in FIG. 3. In the present example, the server 50 has executed a read/write transaction in the first buffer 58, with the written data being referred to as the "new" most recent data 48.
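The read path described above — the director pointing any number of read-only clients at the single buffer holding the most recent data — can be illustrated with a small sketch. `Director`, `begin_read`, and `end_read` are hypothetical names chosen for this illustration; real implementations of the mechanism work at machine level without Python's object machinery.

```python
# Illustrative sketch of the read path: every read-only client is
# directed to the single buffer holding the most recent data; readers
# never lock it, so any number may share it concurrently.
class Director:
    def __init__(self, num_buffers, buffer_size):
        self.buffers = [bytearray(buffer_size) for _ in range(num_buffers)]
        self.latest = 0      # index of the most-recent-data buffer
        self.readers = {}    # client id -> buffer index (active access pointer)

    def begin_read(self, client_id):
        """Point the client's active access pointer at the latest data."""
        self.readers[client_id] = self.latest
        return self.buffers[self.latest]

    def end_read(self, client_id):
        """Client drops its pointer when its transaction completes."""
        self.readers.pop(client_id, None)

d = Director(num_buffers=5, buffer_size=8)
d.buffers[0][:] = b"SPD=0250"
view_a = d.begin_read("client_a")
view_b = d.begin_read("client_b")   # both readers share the same buffer
print(view_a is view_b)             # True: many-to-one, no locking
```

Because readers never modify or lock the buffer, the many-to-one assignment is safe, which is the asynchronous, interference-free behavior the description claims.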
As shown, when the server 50 completes the read/write transaction, the server 50 ends the communication with the first buffer 58 and informs the active access pointer director 38 that the first buffer 58 contains the "new" most recent data 48 (illustrated as dashed communication 72). The active access pointer director 38 in turn identifies the first buffer 58 as containing the most recent data 48 and will then route clients 40 opening new communications to the most recent data 48 of the first buffer 58. As also shown, if the server 50 requests a new read/write transaction, the active access pointer director 38 optionally copies the most recent data 48 of the first buffer 58 into the fourth buffer 70 and directs the active access pointer 52 of the server 50 to the fourth buffer 70 to perform the new read/write transaction.

[0009] When any server 50 performing a transaction on a buffer 36 has completed its transaction, regardless of the type of transaction, the server 50 may optionally provide the active access pointer director 38 with an indication that the transaction is complete. The active access pointer director 38 can, in this sense, keep track of which buffers 36 are currently in use and being accessed. If the server 50 requests an additional transaction, the server communicates with the active access pointer director 38, which allocates an unoccupied buffer 46 with which the new transaction will be completed.

[0010] Although this example illustrates operations of the server 50, the clients 40 may be able to perform similar read transactions. In addition, embodiments of the invention may include clients 40 and/or servers 50 capable of performing the similar read or read/write transactions described herein. In this sense, the server 50 may sometimes act as if it were a client 40, and a client 40 may sometimes act as if it were a server 50.
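The write-and-publish sequence above — the server writes into an unoccupied buffer, then the director marks it as holding the most recent data — can be sketched as follows; `SubjectBuffers`, `acquire_write_buffer`, and `publish` are assumed names for illustration only, not the patented implementation.

```python
# Illustrative write path: the server is handed an unoccupied buffer,
# writes into it, then "publishes" it as the most recent data. Readers
# arriving afterwards are routed to the new buffer, while earlier
# readers keep their old (still valid) buffer untouched.
class SubjectBuffers:
    def __init__(self, num_buffers, size):
        self.buffers = [bytearray(size) for _ in range(num_buffers)]
        self.latest = 0
        self.in_use = {0}  # indexes currently occupied or holding latest data

    def acquire_write_buffer(self):
        """Director hands the server an unoccupied buffer, never the latest."""
        for i, _ in enumerate(self.buffers):
            if i not in self.in_use:
                self.in_use.add(i)
                return i
        # Cannot happen when sized as clients + 2 per server.
        raise RuntimeError("no unoccupied buffer")

    def publish(self, i):
        """Server commits: buffer i now holds the most recent data."""
        old = self.latest
        self.latest = i           # single pointer update is the commit point
        self.in_use.discard(old)  # old latest becomes reclaimable

s = SubjectBuffers(num_buffers=3, size=4)
w = s.acquire_write_buffer()
s.buffers[w][:] = b"v2.0"
s.publish(w)
print(s.latest == w)  # True
```

Note how the commit is a single pointer update, which is what lets the real mechanism make it atomic, and how the superseded buffer is simply recycled — the built-in overwrite of oldest data mentioned later in the description.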
However, there are some differences between the operations of the client 40 and the server 50. For example, although multiple read-only clients 40 can simultaneously access a single buffer 36, only one server 50 can access a single buffer 36 at a time. In another example, although the active access pointer director 38 can direct the active access pointer 42 of a client 40 to a buffer containing the most recent data 48 for a transaction, the active access pointer director 38 will direct the active access pointer 52 of a server 50 only to an unoccupied buffer 46, and never to the buffer containing the most recent data 48, in order to prevent any corruption of the most recent data 48.

The mechanism described above is arranged and configured so that one of the buffers 36 of the subject 32 is always identified by the active access pointer director 38 as containing the most recent data 48, so that the client(s) 40 and/or server(s) 50 can access it. In addition, the mechanism described above can be designed so that each client 40 performing a transaction on the data accessible in the subject 32 is granted access to the most recent data 48 at the time the client 40 requests the transaction. If more recent data is identified during a client 40's ongoing transaction, the client 40 completes the transaction on the data that was the most recent data 48 at the time of the requested transaction. In other words, the most recent data 48 can only be confirmed or guaranteed at the time of the transaction request, not during or at the completion of the transaction. The mechanisms described above can operate using only machine assembly language transactions, without copying data at any design level above the machine assembly language, in particular without copying the data at an operating-system level (i.e., "zero copy").
The embodiments described above have the technical effect that the zero-copy operation is performed by directing the clients 40 and/or the servers 50, using active access pointers 42, 52, to buffers 36 containing the most recent data 48, so that the most recent data 48 is never "locked" or "blocked" in a way that prevents access by other clients 40 and servers 50. In addition, the use of a machine assembly language permits "atomic exchange" operations on the pointers, in which the update is completed in a single atomic operating cycle and therefore cannot be interrupted by other updates of the active access pointers, because no other update can complete within an operating cycle shorter than the atomic exchange.

[0011] By using machine assembly language instructions and basic data structures (e.g., singly linked lists, basic pointers), the mechanisms provide asynchronous inter-process data communications between at least one server 50 and at least one client 40 in a shared memory 22, using a zero-copy data exchange, allowing "lock-free" or "block-free" access to the accessible data without complex configuration of process priorities and without the phenomenon of "priority inversion", in which a lower-priority process that acquires access first locks the data and does not "let go" of it, even if a higher-priority process requires access. In fact, since operations using machine instructions tend toward "the first to access the data wins", higher-priority processes can still be the first to perform their operations. Embodiments of the invention may further utilize the mechanisms described above by providing application programming interfaces (APIs) to access the mechanisms at an operating-system level (or at an application level, etc.). This has the technical effect that the embodiments described above allow the zero-copy method to prevent data locking, data blocking and/or priority inversion.
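The "atomic exchange" relied on above can be illustrated with a compare-and-swap loop. Python offers no true hardware CAS on plain attributes, so this sketch simulates the atomic primitive with a lock; the point is only the control flow of the modeled algorithm — retry until the single-cycle swap lands, and never lock the data itself. All names here (`AtomicRef`, `publish`) are assumptions for exposition.

```python
import threading

# Model of an atomic pointer cell: compare_and_swap succeeds only if the
# cell still holds the expected value, mimicking the single-cycle
# machine-level exchange the description relies on. The lock merely
# simulates hardware atomicity; the modeled algorithm never blocks on data.
class AtomicRef:
    def __init__(self, value):
        self._value = value
        self._guard = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self._value is expected:
                self._value = new
                return True
            return False

latest = AtomicRef("buffer_0")

def publish(new_buffer):
    """Retry until our swap lands: 'first to access wins', no data locking."""
    while True:
        current = latest.load()
        if latest.compare_and_swap(current, new_buffer):
            return current  # the buffer that was superseded

superseded = publish("buffer_1")
print(superseded, latest.load())  # buffer_0 buffer_1
```

Because the pointer update either succeeds completely or is retried, a reader can never observe a half-updated pointer, which is the property that makes locks on the data unnecessary.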
An additional advantage achievable in the embodiments described above is that they prevent the system from bogging down as a result of data copying operations at a language level other than machine language. Data copying operations can make read and/or write requests take a long time for large data sets. By using pointers and pointer exchanges, additional copies can be avoided while ensuring access for all components that need the data. Another advantage of the embodiments described above is an integrated mechanism for overwriting the oldest data present in the buffers, so that no separate "garbage collection" data management method is required. In addition, conventional data sharing between a server and one or more clients is accomplished by creating a global data store and protecting it with semaphores (i.e., control values such as locked/unlocked indicators), for example at an operating-system level, or with some other mutex or data-locking protection (e.g., data interrupts, etc.), and then copying the data, which can be very expensive in machine time, especially if the stored data is large. The approach described herein allows more efficient, faster, non-locking access operations. Other advantages achievable in the embodiments described above include the fact that the subject approach offers loose process coupling, requires little coordination, and does not require a particular startup order (that is, processes, clients and/or servers can join at any time). In addition, the implementation of the APIs described above may result in lower system debugging costs and greater performance margins on similar hardware, as compared with various copying methods. Insofar as this has not already been described, the various aspects and structures of the various embodiments can be used in combination with each other at will.
The fact that an aspect may not be illustrated in all embodiments should not be interpreted as meaning that it is not possible there; it is omitted to keep the description concise. Thus, the various aspects of the various embodiments can be mixed and matched as desired to form new embodiments, whether or not the new embodiments are expressly described. All combinations or permutations of the details described herein are covered by this disclosure.

[0012] List of reference numerals
18 Line-replaceable unit (LRU)
20 Server
22 Memory
24 Data communication network
26 Processes in the LRU
28 Processes in the server
30 Data allocation
32 Subject
34 Constant address
36 Plurality of buffers
38 Active access pointer director
40 Clients
42 Active access pointer
44 Occupied buffer
46 Unoccupied buffer
48 Most recent buffer data
50 Server
52 Active access pointer
54 First client
56 Second client
58 First buffer
60 Second buffer
62 Third client
64 First communication
66 Third buffer
68 Second communication
70 Fourth buffer
72 Third communication
Claims:
Claims (15)

[0001] 1. System for enabling at least one client (40) to access data in a shared memory (22), characterized in that it comprises: an allocation of data present in the shared memory (22) to at least one subject (32), the allocation being accessible by a predetermined constant address (34); the subject(s) (32) having a number of buffers (36), the number of buffers (36) being equal to a number of clients (40) accessing the subject(s) (32), plus two for each server (50) accessing the subject(s) (32); each client (40) and each server (50) having an active access pointer (42, 52); and an active access pointer director (38) for directing the active access pointers (42, 52) to buffers (36) based on a transaction request from a client (40) or a server (50); characterized in that one buffer (36) always contains the most recent data (48) in the shared memory (22), and at least one buffer (36) is always available for accessing data in the shared memory (22); and the active access pointers (42, 52) are allocated among the buffers (36) by the active access pointer director (38) using only machine assembly language transactions, without copying the data at the level of an operating system.

[0002] 2. The system of claim 1, the system being a flight management system.

[0003] 3. The system of claim 1, wherein a plurality of subjects (32) are distributed in the shared memory (22).

[0004] 4. The system of claim 1, wherein the at least one subject (32) and the number of buffers (36) are predefined during initialization of the shared memory (22).

[0005] 5. The system of claim 1, wherein the subject(s) (32) and/or the number of buffers (36) are defined during execution according to a collective number of clients (40) and servers (50) accessing the subject(s) (32).

[0006] 6. The system of claim 1, wherein the client (40) and/or the server (50) accesses the data associated with the buffer (36) to which the active access pointers (42, 52) are directed.
[0007] 7. The system of claim 6, wherein the active access pointer director (38), in response to a completed transaction request, directs active access pointers (42, 52) for new transactions to a different buffer (36) containing the most recent data.

[0008] 8. A method for enabling at least one client (40) to access data in a shared memory (22), the method comprising: assigning data present in the shared memory (22) to at least one subject (32); assigning a single predetermined address (34) to access each at least one subject (32); allocating a number of buffers (36) for each at least one subject (32), equal to the number of clients (40) accessing the subject(s) (32), plus two for each server accessing the subject(s); and responding to transaction requests from at least one of the clients (40) or servers (50) by assigning to a buffer (36) an active access pointer (42, 52) for each respective client (40) or server (50), the client or server accessing the data via the buffer (36) without copying data at the operating-system level.

[0009] 9. The method of claim 8, wherein accessing the data through the buffer (36) prevents any data locking.

[0010] 10. The method of claim 8, wherein assigning the data to at least one subject (32), assigning a single predetermined address (34) and assigning the number of buffers (36) to each subject (32) take place during the initialization of the shared memory (22).

[0011] 11. The method of claim 8, wherein assigning the data to at least one subject (32) and/or assigning the number of buffers (36) for each at least one subject (32) occurs during execution according to the collective number of clients (40) and servers (50) accessing the at least one subject (32).

[0012] 12. The method of claim 8, wherein responding to the transaction requests further comprises directing the active access pointer (42) for each respective client (40) to a buffer (36) containing the most recent data (48) of the shared memory (22).

[0013] 13.
The method of claim 12, further comprising performing, by the clients (40) and/or the servers (50), a transaction on the accessed data.

[0014] 14. The method of claim 13, wherein performing a transaction includes retrieving the data and/or writing new data to the buffer (36) and/or storing the data of the buffer (36) in the shared memory (22).

[0015] 15. The method of claim 14, further comprising, in response to a completed transaction request, updating the direction of the active access pointer (42, 52) for each respective client (40) or each respective server (50) to a different buffer (36) containing the most recent data (48).
Similar technologies:
Publication number | Publication date | Patent title
FR3025908B1 | 2019-07-12 | MECHANISM AND METHOD FOR ACCESSING DATA IN A SHARED MEMORY
US10628347B2 | 2020-04-21 | Deallocation of memory buffer in multiprocessor systems
FR3025907B1 | 2019-07-26 | MECHANISM AND METHOD FOR PROVIDING COMMUNICATION BETWEEN A CLIENT AND A SERVER BY ACCESSING SHARED MEMORY MESSAGE DATA
FR2792087A1 | 2000-10-13 | METHOD FOR IMPROVING THE PERFORMANCE OF A MULTIPROCESSOR SYSTEM INCLUDING A WORK WAITING LINE AND SYSTEM ARCHITECTURE FOR IMPLEMENTING THE METHOD
US10223301B2 | 2019-03-05 | Pre-allocating memory buffers by physical processor and using a bitmap metadata in a control program
CN106708608A | 2017-05-24 | Distributed lock service method and acquisition method, and corresponding device
US11144213B2 | 2021-10-12 | Providing preferential access to a metadata track in two track writes
US10101999B2 | 2018-10-16 | Memory address collision detection of ordered parallel threads with bloom filters
CN102902765A | 2013-01-30 | Method and device for removing file occupation
EP3599552A1 | 2020-01-29 | Electronic device and method for installing avionics software applications on a platform comprising a multicore processor, associated computer program and electronic system
US20200301756A1 | 2020-09-24 | Deadlock resolution between distributed processes using process and aggregated information
US20160371225A1 | 2016-12-22 | Methods for managing a buffer cache and devices thereof
KR20160145250A | 2016-12-20 | Shuffle Embedded Distributed Storage System Supporting Virtual Merge and Method Thereof
EP2726985B1 | 2021-10-06 | Device and method for synchronizing tasks executed in parallel on a platform comprising several calculation units
US10296523B2 | 2019-05-21 | Systems and methods for estimating temporal importance of data
EP2181388A1 | 2010-05-05 | Method for managing the shared resources of a computer system, a module for supervising the implementation of same and a computer system having one such module
US9734461B2 | 2017-08-15 | Resource usage calculation for process simulation
FR3065301A1|2018-10-19|METHOD AND ELECTRONIC DEVICE FOR VERIFYING PARTITIONING CONFIGURATION, COMPUTER PROGRAM US20210141782A1|2021-05-13|Concurrent update management US20210272012A1|2021-09-02|Method of searching machine learning model for io load prediction in use of versioning information EP2652624B1|2020-03-18|Method, computer program, and device for managing memory access in a numa multiprocessor architecture FR3045866A1|2017-06-23|COMPUTER COMPRISING A MULTI-HEART PROCESSOR AND A CONTROL METHOD US20210255894A1|2021-08-19|Garbage collection work stealing mechanism US11132631B2|2021-09-28|Computerized system and method for resolving cross-vehicle dependencies for vehicle scheduling CN112948501B|2021-08-10|Data analysis method, device and system
Family patents:
Publication No. | Publication date
GB201516085D0 | 2015-10-28
US9794340B2 | 2017-10-17
GB2532842A | 2016-06-01
FR3025908B1 | 2019-07-12
CN105589754B | 2021-05-28
JP2016062608A | 2016-04-25
GB2532842B | 2018-05-23
BR102015020854A2 | 2016-03-29
US20160080491A1 | 2016-03-17
CA2902844A1 | 2016-03-15
CN105589754A | 2016-05-18
Cited documents:
Publication No. | Filing date | Publication date | Applicant | Title
DE69228297T2 | 1991-08-06 | 1999-06-02 | Fujitsu Ltd | METHOD AND DEVICE FOR REDUCING THE BLOCKING TIME OF A COMMON BUFFER
US5715447A | 1991-08-06 | 1998-02-03 | Fujitsu Limited | Method of and an apparatus for shortening a lock period of a shared buffer
KR0152714B1 | 1995-12-06 | 1998-10-15 | 양승택 | Buffer managing method using buffer locking techniques in storage system of multi-user environment
WO2001013229A2 | 1999-08-19 | 2001-02-22 | Venturcom, Inc. | System and method for data exchange
US20020144010A1 | 2000-05-09 | 2002-10-03 | Honeywell International Inc. | Communication handling in integrated modular avionics
US7454477B2 | 2005-05-16 | 2008-11-18 | Microsoft Corporation | Zero-copy transfer of memory between address spaces
US20080148095A1 | 2006-12-14 | 2008-06-19 | Motorola, Inc. | Automated memory recovery in a zero copy messaging system
CN101296236B | 2008-06-12 | 2011-06-08 | 北京中星微电子有限公司 | Method, system and data client terminal for multi-user real-time access to multimedia data
US8555292B2 | 2008-06-27 | 2013-10-08 | Microsoft Corporation | Synchronizing communication over shared memory
US8316368B2 | 2009-02-05 | 2012-11-20 | Honeywell International Inc. | Safe partition scheduling on multi-core processors
US9098462B1 | 2010-09-14 | 2015-08-04 | The Boeing Company | Communications via shared memory
US9396227B2 | 2012-03-29 | 2016-07-19 | Hewlett Packard Enterprise Development Lp | Controlled lock violation for data transactions
BR112014031915A2 | 2012-06-21 | 2017-06-27 | Saab Ab | Method for managing memory access of avionics control system, avionics control system and computer program
US9176872B2 | 2013-02-25 | 2015-11-03 | Barco N.V. | Wait-free algorithm for inter-core, inter-process, or inter-task communication

Cited by:
US10560542B2 | 2014-09-15 | 2020-02-11 | Ge Aviation Systems Llc | Mechanism and method for communicating between a client and a server by accessing message data in a shared memory
US9710190B1 | 2014-09-30 | 2017-07-18 | EMC IP Holding Company LLC | Shared memory
US10140036B2 | 2015-10-29 | 2018-11-27 | Sandisk Technologies Llc | Multi-processor non-volatile memory system having a lockless flow data path
US10417261B2 | 2016-02-18 | 2019-09-17 | General Electric Company | Systems and methods for flexible access of internal data of an avionics system
EP3488579A4 | 2016-07-21 | 2020-01-22 | Baidu.com Times Technology Co., Ltd. | Efficient communications amongst computing nodes for operating autonomous vehicles
US10037166B2 | 2016-08-03 | 2018-07-31 | Ge Aviation Systems Llc | Tracking memory allocation
US10282251B2 | 2016-09-07 | 2019-05-07 | Sandisk Technologies Llc | System and method for protecting firmware integrity in a multi-processor non-volatile memory system
US20190188278A1 | 2017-12-15 | 2019-06-20 | Slack Technologies, Inc. | Method, apparatus and computer program product for improving data indexing in a group-based communication platform
JP2019179309A | 2018-03-30 | 2019-10-17 | 日立オートモティブシステムズ株式会社 | Processor
US10785170B2 | 2018-12-28 | 2020-09-22 | Beijing Voyager Technology Co., Ltd. | Reading messages in a shared memory architecture for a vehicle
WO2020139389A1 | 2018-12-28 | 2020-07-02 | Didi Research America, Llc | Shared memory architecture for a vehicle
WO2020139393A1 | 2018-12-28 | 2020-07-02 | Didi Research America, Llc | Message buffer for communicating information between vehicle components
WO2020139396A1 | 2018-12-28 | 2020-07-02 | Didi Research America, Llc | Writing messages in a shared memory architecture for a vehicle
US10572405B1 | 2018-12-28 | 2020-02-25 | Didi Research America, Llc | Writing messages in a shared memory architecture for a vehicle
US10747597B2 | 2018-12-28 | 2020-08-18 | Beijing Voyager Technology Co., Ltd. | Message buffer for communicating information between vehicle components
WO2020139395A1 | 2018-12-28 | 2020-07-02 | Didi Research America, Llc | Reading messages in a shared memory architecture for a vehicle
Legal status:
2016-09-26 | PLFP | Fee payment | Year of fee payment: 2
2017-09-25 | PLFP | Fee payment | Year of fee payment: 3
2018-08-22 | PLFP | Fee payment | Year of fee payment: 4
2018-11-02 | PLSC | Search report ready | Effective date: 20181102
2019-08-20 | PLFP | Fee payment | Year of fee payment: 5
2020-08-19 | PLFP | Fee payment | Year of fee payment: 6
2021-08-19 | PLFP | Fee payment | Year of fee payment: 7
Priority:
Application No. | Filing date | Title
US14/486,336 (US9794340B2) | 2014-09-15 | Mechanism and method for accessing data in a shared memory
US14486336 | 2014-09-15